Advancing Parsimonious Deep Learning Weather Prediction using the HEALPix Mesh
We present a parsimonious deep learning weather prediction model on the
Hierarchical Equal Area isoLatitude Pixelization (HEALPix) to forecast seven
atmospheric variables for arbitrarily long lead times on a global approximately
110 km mesh at 3h time resolution. In comparison to state-of-the-art machine
learning weather forecast models, such as Pangu-Weather and GraphCast, our
DLWP-HPX model uses coarser resolution and far fewer prognostic variables. Yet,
at one-week lead times its skill is only about one day behind the
state-of-the-art numerical weather prediction model from the European Centre
for Medium-Range Weather Forecasts. We report successive forecast improvements
resulting from model design and data-related decisions, such as switching from
the cubed sphere to the HEALPix mesh, inverting the channel depth of the U-Net,
and introducing gated recurrent units (GRU) on each level of the U-Net
hierarchy. The consistent east-west orientation of all cells on the HEALPix
mesh facilitates the development of location-invariant convolution kernels that
are successfully applied to propagate global weather patterns across our
planet. Without any loss of spectral power after two days, the model can be
unrolled autoregressively for hundreds of steps into the future to generate
stable and realistic states of the atmosphere that respect seasonal trends, as
showcased in one-year simulations. Our parsimonious DLWP-HPX model is
research-friendly and potentially well-suited for sub-seasonal and seasonal
forecasting.
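The autoregressive unrolling described above can be sketched in a few lines. This is a minimal illustration only, not the DLWP-HPX implementation: `step_fn` is a hypothetical stand-in for the learned 3h update, and the toy decay model simply keeps the example runnable with seven prognostic variables.

```python
import numpy as np

def rollout(step_fn, state, n_steps):
    """Autoregressively unroll a one-step forecast model:
    each output state is fed back in as the next input."""
    states = [state]
    for _ in range(n_steps):
        state = step_fn(state)   # one learned 3h step in the real model
        states.append(state)
    return np.stack(states)

# toy linear "model" standing in for the learned network;
# 7 entries mimic the seven prognostic variables, 8 steps = 24h at 3h resolution
traj = rollout(lambda s: 0.99 * s, np.ones(7), 8)
```

Stability of the real model means exactly this loop can be run for hundreds of steps (e.g. a full year at 3h resolution) without the forecast drifting into unphysical states.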
Inductive biases in deep learning models for weather prediction
Deep learning has recently gained immense popularity in the Earth sciences as
it enables us to formulate purely data-driven models of complex Earth system
processes. Deep learning-based weather prediction (DLWP) models have made
significant progress in the last few years, achieving forecast skills
comparable to established numerical weather prediction (NWP) models at
substantially lower computational cost. To train accurate, reliable,
and tractable DLWP models with several million parameters, the model design
needs to incorporate suitable inductive biases that encode structural
assumptions about the data and modelled processes. When chosen appropriately,
these biases enable faster learning and better generalisation to unseen data.
Although inductive biases play a crucial role in successful DLWP models, they
are often not stated explicitly, and how they contribute to model performance
remains unclear. Here, we review and analyse the inductive biases of six
state-of-the-art DLWP models, taking a deeper look at five key design
elements: input data, forecasting objective, loss components, layered design of
the deep learning architectures, and optimisation methods. We show how the
design choices made in each of the five design elements relate to structural
assumptions. Given recent developments in the broader DL community, we
anticipate that the future of DLWP will likely see a wider use of foundation
models -- large models pre-trained on vast datasets with self-supervised
learning -- combined with explicit physics-informed inductive biases that allow
the models to provide competitive forecasts even at the more challenging
subseasonal-to-seasonal scales.
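One concrete example of such a structural assumption is the weight sharing in convolutional layers, which hard-codes translation equivariance: the same kernel is applied everywhere, so shifting the input shifts the output. The sketch below is a hypothetical 1D illustration of this inductive bias, not code from any of the reviewed models.

```python
import numpy as np

def conv1d_valid(x, k):
    """Plain 'valid' 1D convolution (correlation): the kernel k is
    shared across all positions, encoding translation equivariance."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0, 3.0])
k = np.array([1.0, -2.0, 1.0])  # shared kernel (here: a second-difference stencil)

# shifting the input by one sample shifts the output by one sample
y_full = conv1d_valid(x, k)
y_shifted_input = conv1d_valid(x[1:], k)
assert np.allclose(y_shifted_input, y_full[1:])
```

The same principle underlies the location-invariant kernels on the HEALPix mesh mentioned in the first abstract: because every cell shares an east-west orientation, a single shared kernel can propagate weather patterns anywhere on the globe.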